Convergence of approximate operator methods for eigenvectors

Authors

Abstract


Similar references

Convergence Analysis for Operator Splitting Methods

We analyze the order of convergence for operator splitting methods applied to conservation laws with stiff source terms. We suppose that the source term q(u) is dissipative. It is proved that the $L^1$ error introduced by the time-splitting can be bounded by $O(\Delta t\,\|q(u_0)\|_{L^1(\mathbb{R})})$, which improves the $O(Q\Delta t)$ upper bound, where $\Delta t$ is the splitting time step and $Q$ is the Lipschitz constant of q...
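As a rough illustration of the splitting idea (a sketch under our own assumptions, not the paper's scheme), the snippet below alternates a convection sub-step and a source sub-step for a scalar law $u_t + f(u)_x = q(u)$; the Burgers flux, the dissipative relaxation source $q(u) = -u/\varepsilon$, and all parameter values are illustrative choices.

```python
# First-order (Godunov) operator splitting for u_t + f(u)_x = q(u)
# with a stiff, dissipative source term. Illustrative assumptions:
# Burgers flux f(u) = u^2/2 and relaxation source q(u) = -u/eps.
import numpy as np

def split_step(u, dx, dt, eps):
    # Convection sub-step: explicit upwind flux difference (valid for u > 0).
    flux = 0.5 * u**2
    u = u - dt / dx * (flux - np.roll(flux, 1))
    # Source sub-step: u' = -u/eps is linear, so it can be integrated
    # exactly, which stays stable no matter how stiff (small) eps is.
    return u * np.exp(-dt / eps)

nx, eps = 200, 1e-3
x = np.linspace(0.0, 1.0, nx, endpoint=False)   # periodic grid
dx = 1.0 / nx
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)           # smooth, positive data
dt = 0.4 * dx / np.max(np.abs(u))               # CFL-limited splitting step
for _ in range(100):
    u = split_step(u, dx, dt, eps)
```

Halving the splitting step $\Delta t$ should roughly halve the splitting error, consistent with a first-order bound.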


Approximate Solutions of Nonlinear Random Operator Equations: Convergence in Distribution

For nonlinear random operator equations where the distributions of the stochastic inputs are approximated by sequences of random variables converging in distribution and where the underlying deterministic equations are simultaneously approximated, we prove a result about tightness and convergence in distribution of the approximate solutions. We apply our result to a random differential equation...
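As a toy numerical analogue (our construction, not the paper's), one can approximate the stochastic input by a sequence converging in distribution, solve a contractive random fixed-point equation per sample, and check that the solution distributions come out close; the equation and the distributions below are assumptions for illustration.

```python
# Toy illustration: solve the random fixed-point equation u = 0.5*cos(u) + X,
# once with the limiting input X ~ N(0, 1) and once with an approximating
# input X_n (standardized binomial, which tends to N(0, 1) by the CLT).
import numpy as np

rng = np.random.default_rng(0)

def solve(x, iters=50):
    # Successive approximation; u -> 0.5*cos(u) + x is a contraction
    # (the derivative of 0.5*cos is bounded by 0.5 in absolute value).
    u = 0.0
    for _ in range(iters):
        u = 0.5 * np.cos(u) + x
    return u

n = 10_000
x_limit = rng.standard_normal(n)
x_approx = (rng.binomial(30, 0.5, n) - 15.0) / np.sqrt(7.5)

u_limit, u_approx = solve(x_limit), solve(x_approx)
# Nearby quantiles suggest the solution distributions are close, mirroring
# convergence in distribution of the approximate solutions.
print(np.quantile(u_limit, [0.1, 0.5, 0.9]))
print(np.quantile(u_approx, [0.1, 0.5, 0.9]))
```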


Approximate Newton Methods and Their Local Convergence

Many machine learning models are formulated as optimization problems, so it is important to solve large-scale optimization problems efficiently in big data applications. Recently, stochastic second-order methods have emerged and attracted much attention in optimization because of their efficiency at each iteration; they rectify a weakness of the ordinary Newton method, namely its high cost per iteration...
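A minimal sketch of one such method, with details that are our assumptions rather than the paper's: regularized logistic regression, the exact full gradient, and a Hessian estimated from a random subsample.

```python
# Approximate (subsampled) Newton step: full gradient, subsampled Hessian.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def subsampled_newton(X, y, lam=1e-2, batch=256, iters=20):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n + lam * w          # exact gradient
        idx = rng.choice(n, size=min(batch, n), replace=False)
        Xs, Ds = X[idx], p[idx] * (1.0 - p[idx])    # subsample + curvature
        H = (Xs * Ds[:, None]).T @ Xs / len(idx) + lam * np.eye(d)
        w -= np.linalg.solve(H, grad)               # approximate Newton step
    return w

# Synthetic problem to exercise the iteration.
n, d = 5000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (rng.random(n) < sigmoid(X @ w_true)).astype(float)
w = subsampled_newton(X, y)
```

The point of the subsample is that forming the Hessian costs $O(bd^2)$ for batch size $b$ instead of $O(nd^2)$, while the step can still converge quickly near the optimum.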


Convergence of iterative methods for solving random operator equations

We discuss the concept of probabilistic quasi-nonexpansive mappings in connection with the mappings of Nishiura. We also prove a result regarding the convergence of the sequence of successive approximations for probabilistic quasi-nonexpansive mappings.
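The paper's setting is probabilistic metric spaces; as a plain deterministic analogue of the iteration it studies, successive approximation for a quasi-nonexpansive map looks like the sketch below, where $T(x) = \sin(x)$ is our illustrative choice (it has the unique fixed point $p = 0$ and satisfies the quasi-nonexpansive inequality $|T(x) - p| \le |x - p|$).

```python
# Successive approximations x_{k+1} = T(x_k) for a quasi-nonexpansive map:
# the distance to the fixed point p = 0 never increases, and here it -> 0.
import math

def T(x):
    return math.sin(x)   # |sin(x)| <= |x|, fixed point at 0

x = 1.0
for k in range(10_000):
    x = T(x)
print(abs(x))   # small; the sin iteration decays like sqrt(3/k)
```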


A Unifying Framework for Convergence Analysis of Approximate Newton Methods

Many machine learning models are formulated as optimization problems, so it is important to solve large-scale optimization problems efficiently in big data applications. Recently, subsampled Newton methods have emerged and attracted much attention in optimization because of their efficiency at each iteration; they rectify a weakness of the ordinary Newton method, namely its high cost per iteration, while...



Journal

Journal title: Bulletin of the Australian Mathematical Society

Year: 1970

ISSN: 0004-9727, 1755-1633

DOI: 10.1017/s0004972700045871